Limited-memory common-directions method for large-scale optimization: convergence, parallelization, and distributed optimization

Authors

Abstract

In this paper, we present a limited-memory common-directions method for smooth optimization that interpolates between first- and second-order methods. At each iteration, a subspace of limited dimension is constructed using first-order information from previous iterations, and an efficient Newton method is deployed to find an approximate minimizer within this subspace. With the dimension properly selected to be as small as two, the proposed algorithm achieves the optimal convergence rates of first-order methods while remaining a descent method, and it also possesses fast convergence speed on nonconvex problems. Since the major operations of our method are dense matrix-matrix operations, the method can be efficiently parallelized in multicore environments even for sparse data. By wisely utilizing historical information, our method is also communication-efficient for distributed optimization that uses multiple machines, as the Newton steps can be calculated with little communication. Numerical study shows that our method has superior empirical performance on real-world large-scale machine learning problems.
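
To make the subspace idea above concrete, here is a minimal NumPy sketch, not the authors' implementation: the function name lm_common_directions, the memory size m, the unit step (standing in for the method's line search and safeguards), and the quadratic test problem are all illustrative assumptions.

```python
import numpy as np

def lm_common_directions(grad, hess_vec, x0, m=4, max_iter=200, tol=1e-8):
    """Sketch: keep at most m past directions, then take a Newton step
    restricted to their span, using only Hessian-vector products."""
    x = x0.astype(float).copy()
    mem = []                               # limited memory of directions
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        mem = (mem + [g])[-m:]             # keep the newest m columns only
        P = np.column_stack(mem)           # n-by-k basis of the subspace
        HP = np.column_stack([hess_vec(p) for p in mem])
        # Subspace Newton system: (P^T H P) t = -P^T g
        t = np.linalg.lstsq(P.T @ HP, -(P.T @ g), rcond=None)[0]
        d = P @ t                          # combined search direction
        x = x + d                          # unit step; line search omitted
        mem = (mem + [d])[-m:]             # also remember the step taken
    return x

# Usage on a strongly convex quadratic f(x) = 0.5 x^T A x - b^T x:
rng = np.random.default_rng(0)
n = 100
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                    # symmetric positive definite
b = rng.standard_normal(n)
x = lm_common_directions(lambda x: A @ x - b, lambda v: A @ v, np.zeros(n))
print(np.linalg.norm(A @ x - b))           # near zero at the minimizer
```

Note that the only large-scale operations are the matrix products forming HP and the inner products behind the small k-by-k Newton system, which is why the abstract can describe the method as dominated by dense matrix-matrix operations that parallelize well.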

Similar articles

Limited-memory Common-directions Method for Distributed Optimization and its Application on Empirical Risk Minimization

Distributed optimization has become an important research topic for dealing with the extremely large volumes of data available at Internet companies nowadays. Additional machines make computation less expensive, but inter-machine communication becomes prominent in the optimization process, and efficient optimization methods should reduce the amount of communication in order to achieve shorte...

A limited memory adaptive trust-region approach for large-scale unconstrained optimization

This study is concerned with a trust-region-based method for solving unconstrained optimization problems. The approach takes advantage of the compact limited-memory BFGS updating formula together with an appropriate adaptive radius strategy. In our approach, the adaptive technique lets us decrease the number of subproblems to be solved, while utilizing the structure of limited-memory quasi-Newt...
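
As background for the snippet above, the compact limited-memory BFGS representation that such trust-region approaches exploit is, in its standard Byrd–Nocedal–Schnabel form (shown here for context; the cited work may use a variant):

```latex
B_k \;=\; \gamma_k I \;-\;
\begin{bmatrix} \gamma_k S_k & Y_k \end{bmatrix}
\begin{bmatrix} \gamma_k S_k^{\top} S_k & L_k \\ L_k^{\top} & -D_k \end{bmatrix}^{-1}
\begin{bmatrix} \gamma_k S_k^{\top} \\ Y_k^{\top} \end{bmatrix}
```

where the columns of S_k and Y_k are the m most recent pairs s_i = x_{i+1} - x_i and y_i = ∇f(x_{i+1}) - ∇f(x_i), D_k is the diagonal of S_k^T Y_k, and L_k is its strictly lower-triangular part. This scaled-identity-plus-low-rank structure is what keeps each trust-region subproblem cheap to solve.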

On the limited memory BFGS method for large scale optimization

We study the numerical performance of a limited-memory quasi-Newton method for large-scale optimization, which we call the L-BFGS method. We compare its performance with that of the method developed by Buckley and LeNir, which combines cycles of BFGS steps and conjugate direction steps. Our numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir and is better a...
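
For context on the method being benchmarked, the heart of L-BFGS is the textbook two-loop recursion, sketched below in NumPy; this is the standard form of the recursion, not code taken from the cited paper.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: returns -H_k @ g, where H_k approximates the
    inverse Hessian from curvature pairs (s_i, y_i), ordered oldest to
    newest; assumes at least one pair is stored."""
    q = g.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q = q - a * y
    s, y = s_list[-1], y_list[-1]
    r = (s @ y) / (y @ y) * q              # H_0 = gamma * I initial scaling
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ r)
        r = r + (a - b) * s
    return -r                              # quasi-Newton search direction
```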

Supplement of “Limited-memory Common-directions Method for Distributed Optimization and its Application on Empirical Risk Minimization”

II More Experiments. We present more experimental results that are not included in the main paper in this section. We consider the same experiment environment, and the same problem being solved. We present the results using different values of C to see the relative efficiency when the problems become more difficult or easier. The result of C = 10⁻³ is shown in Figure (I), and the result of C = 1...

Journal

Journal title: Mathematical Programming Computation

Year: 2022

ISSN: 1867-2957, 1867-2949

DOI: https://doi.org/10.1007/s12532-022-00219-z